The Case to Be Made for AI Etiquette
The guy I down-nod at the gym every day returned it reluctantly this morning, and some ancient part of my brain thought I might need to fight him. Instead, I went home and asked ChatGPT what it meant. From the nuances of the situation and the brief history of our interactions I fed it, I hoped to derive some unique insight, something I might have missed, buried in the terabytes of copyrighted material it might have been trained on.
I might have consulted a human for their opinion, but that would have required overcoming my solitude-induced inertia. And for what? The LLM gave a satisfactory answer. Besides, society is trending in that direction: everything efficient and purposeful. We have sugar-free Coke, alcohol-free beer, and content replacing art. But a question arises: when does optimizing something undermine its essence? The recurring metaphor that comes to mind is sharpening a wooden pencil until it is too short to use.
Scrolling around OpenAI’s website, I found an interesting statistic.
About half (49%) of ChatGPT messages are from people “Asking,” a growing, high-rated category that shows people value ChatGPT more as an advisor than as a task-completer.
As LLMs enter our daily lives as advisors, they replace what community, religion, and culture once provided. The cost of actual community is accommodation: letting go of individual boundaries for something beyond ourselves. LLMs eliminate that cost. What do we lose by no longer paying it?
The first thing we lose without community is ritual. In the school I attended, for example, teachers would insist that we stand and greet them with 'good evening' or 'good afternoon'. Work had to be written in pen, in cursive, before it would be corrected. Notebooks had to be kept separate by subject. If teachers were purely transmitters of knowledge, they would seem like bad, crabby products. Now that I use a large language model to teach and advise me on whatever I am curious about, I am no longer sure the frictionless version is better.
Personally, I found all of the formality excessive. But it did one thing well: establishing order. In the rituals of greeting and addressing each other, there is a clear delineation between teacher and student, an agreement on who we are in relation to each other and whose truth carries greater weight. Formalities are an important basis for human beings not having to constantly fight over whose truth takes precedence. In the way I ask an LLM to explain concepts, the caps lock I use when I am frustrated, and the way it apologizes and changes its answer, truth becomes relative. That problem is reinforced by broader cultural shifts, marked perhaps by a loneliness epidemic, the loss of community, and the disappearance of third spaces.
The more culture turns inward, pursuing authenticity, self-expression, and personal truth, the more formalities seem fake or oppressive. Why follow a script? Why use others' words? Why act in ways that don't reflect our inner state? But rejecting these forms requires us to generate our own meanings and impose them on each other, and conflict arises when we inevitably misread their nuances.
For example, under the old form, on a date, the person who asked pays, or the man pays, or you split. Everyone knows the script. Now, rejecting this as outdated, both reach awkwardly for the check. Each must guess: if I pay, am I being generous or patronizing? If I don't, am I cheap or feminist? Does their offer to split mean they're not interested? A five-second transaction becomes a test of values neither agreed to take. In the absence of forms, we are handicapped in navigating each other's relative truths, some of them extreme, owing to personality, surroundings, and genetics.
So our personal truths are already making us less cohesive. LLMs make this shift worse. The problem with AI advisors is that they're often optimized for user satisfaction, not truth. Humans favor information that confirms what we already believe; we want to reason our way toward comfortable conclusions that simplify our view of the world. When AI is trained using Reinforcement Learning from Human Feedback (RLHF), these tendencies get baked in:
‘Chatbots can come to encode human cognitive biases and preferences that are implicitly expressed in human responses during post-training.’
This shouldn't be a surprise. They are trained by humans; why wouldn't human biases be baked in? The result: LLMs are too agreeable, and they will reinforce your beliefs and your thinking.
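To make the mechanism concrete, here is a toy sketch, purely illustrative and not any lab's actual training pipeline. The simulated rater, the candidate answers, and the 80% preference rate are all invented for the example. The point is only this: if human raters prefer agreeable answers even modestly more often, the reward signal that RLHF optimizes will score agreement higher, and the policy follows the reward.

```python
import random

# Hypothetical candidate responses to a user's claim,
# tagged by whether they agree with the user.
candidates = [
    {"text": "You're absolutely right.", "agrees": True},
    {"text": "That's a fair point, and it fits what you said.", "agrees": True},
    {"text": "Actually, the evidence points the other way.", "agrees": False},
]

def human_preference(a, b):
    """Simulated rater: picks the agreeable response 80% of the time."""
    if a["agrees"] != b["agrees"]:
        agreeable, other = (a, b) if a["agrees"] else (b, a)
        return agreeable if random.random() < 0.8 else other
    return random.choice([a, b])

# Stand-in for a reward model: tally preference wins per response
# over many pairwise comparisons.
wins = {c["text"]: 0 for c in candidates}
for _ in range(10_000):
    a, b = random.sample(candidates, 2)
    wins[human_preference(a, b)["text"]] += 1

# The policy is then tuned to maximize this reward, so it gravitates
# toward whatever scored highest -- here, the agreeable answers,
# regardless of which one is true.
for text, score in sorted(wins.items(), key=lambda kv: -kv[1]):
    print(f"{score:>6}  {text}")
```

Run it and the disagreeing answer reliably lands at the bottom. Nothing in the loop ever checks truth; the bias in the raters becomes the bias in the reward, which becomes the bias in the model.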
We subconsciously engage with it as though it were human. I say please, after all, and so do millions of users. (Some engage with it consciously, as the rise of AI girlfriends and companions shows.) So we lack the instinct to challenge its answers the way we would challenge, say, our own thinking. Through this, the sycophantic nature of AI becomes a fresh breeding ground for delusion and psychosis.
The lack of friction in using tech isn't a new discussion. But I'd argue this time is different. This time, it isn't about making it harder to broadcast messages to large audiences on X, curbing autoplay on YouTube to reduce exposure to extreme content, or fighting misinformation on WhatsApp. Those were all platform features that could be tweaked. This time, AI has no shape of its own. It is designed to conform entirely to you, and you cannot really fix formlessness; it is the product. (There is considerable overlap with the familiar complaints about recommendation algorithms, but I'd still say AI's formlessness is more implicit.)
Users could theoretically prompt it to be more critical and harsh, but again, it is the user who must do it. The model has no stake in the truth; it is performing criticism as a role. Amusingly, even then, since the user asked for the criticism, it is still fundamentally optimizing for user satisfaction, even while disagreeing.
So what is the solution? We've abandoned form in culture, AI is formless by design, and the consequences range from epistemic drift to self-poisoning. I think we should take AI etiquette seriously. This might sound ridiculous; we haven't even achieved AGI. But we don't need to. A gun has no agency, and yet we have gun etiquette. Etiquette gives AI some form. Etiquette keeps us safe. Etiquette forces us to come out of ourselves.
Clinical assessment questions for AI-related psychiatric risk include: Does it feel like the chatbot understands you in ways others do not? Have you found yourself talking to friends and family less as a result? Has the chatbot confirmed beliefs that others have questioned? These are questions for psychiatrists assessing patients.
But what if we applied them to ourselves, before we needed a clinician to ask? Gun etiquette isn't just about handling the weapon. It's about checking yourself: Is this the right context? Am I keeping myself and the people around me safe?
AI etiquette might look similar. Asking yourself: Am I treating this as a tool or as a companion? Have I run this advice past someone with a stake in it? What counters my thinking? Since AI cannot provide its own form, we must create it. Etiquette serves that role: keeping the pencil useful rather than letting it become an unusable stub.